81 research outputs found

    Relative fundamental frequency during vocal onset and offset in older speakers with and without Parkinson's disease

    The relative fundamental frequency (RFF) surrounding production of a voiceless consonant has previously been shown to be lower in speakers with hypokinetic dysarthria and Parkinson's disease (PD) relative to age- and sex-matched controls. Here, RFF was calculated in 32 speakers with PD without overt hypokinetic dysarthria and 32 age- and sex-matched controls to better understand the relationships between RFF and PD progression, medication status, and sex. Results showed that RFF was statistically significantly lower in individuals with PD compared with healthy age-matched controls, and statistically significantly lower in individuals diagnosed at least 5 years prior to experimentation relative to individuals recorded less than 5 years past diagnosis. Contrary to previous trends, no effect of medication was found. However, a statistically significant effect of sex on offset RFF was shown, with lower values in males relative to females. Future work examining the physiological bases of RFF is warranted.
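    RFF is conventionally expressed in semitones: the fundamental frequency of each voicing cycle near the voiceless consonant is normalized to a steady-state reference cycle. A minimal sketch of that conversion follows; the function name and the per-cycle f0 values are illustrative assumptions, not data from this study.

```python
import math

def rff_semitones(cycle_f0s, ref_f0):
    """Convert per-cycle fundamental frequency estimates (Hz) to relative
    fundamental frequency (RFF) in semitones re a steady-state reference."""
    return [12.0 * math.log2(f0 / ref_f0) for f0 in cycle_f0s]

# Hypothetical f0 values for ten voicing cycles approaching a voiceless
# consonant (voicing offset), referenced to the first, steady-state cycle.
offset_f0s = [110.0, 110.2, 109.8, 109.1, 108.5, 107.6, 106.4, 105.0, 103.2, 101.0]
offset_rff = rff_semitones(offset_f0s, ref_f0=offset_f0s[0])
```

    The negative values toward the final cycles reflect the f0 drop at voicing offset that the study reports as exaggerated (more negative) in speakers with PD.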

    Surface electromyographic control of a novel phonemic interface for speech synthesis

    Many individuals with minimal movement capabilities use augmentative and alternative communication (AAC) to communicate. These individuals require both an interface with which to construct a message (e.g., a grid of letters) and an input modality with which to select targets. This study evaluated the interaction of two such systems: (a) an input modality using surface electromyography (sEMG) of spared facial musculature, and (b) an onscreen interface from which users select phonemic targets. These systems were evaluated in two experiments: (a) participants without motor impairments used the systems during a series of eight training sessions, and (b) one individual who uses AAC used the systems for two sessions. Both the phonemic interface and the electromyographic cursor show promise for future AAC applications.
    Funding: F31 DC014872, R01 DC002852, R01 DC007683 (NIDCD NIH HHS); T90 DA032484 (NIDA NIH HHS)

    LaDIVA: A neurocomputational model providing laryngeal motor control for speech acquisition and production

    Many voice disorders are the result of intricate neural and/or biomechanical impairments that are poorly understood. The limited knowledge of their etiological and pathophysiological mechanisms hampers effective clinical management. Behavioral studies have been used concurrently with computational models to better understand typical and pathological laryngeal motor control. Thus far, however, a unified computational framework that quantitatively integrates physiologically relevant models of phonation with the neural control of speech has not been developed. Here, we introduce LaDIVA, a novel neurocomputational model with physiologically based laryngeal motor control. We combined the DIVA model (an established neural network model of speech motor control) with the extended body-cover model (a physics-based vocal fold model). The resulting integrated model, LaDIVA, was validated by comparing its simulations with behavioral responses to perturbations of auditory vocal fundamental frequency (fo) feedback in adults with typical speech. LaDIVA demonstrated the capability to simulate different modes of laryngeal motor control, ranging from short-term (i.e., reflexive) and long-term (i.e., adaptive) auditory feedback paradigms to the generation of prosodic contours in speech. Simulations showed that LaDIVA's laryngeal motor control displays properties of motor equivalence, i.e., LaDIVA could robustly generate compensatory responses to reflexive vocal fo perturbations with varying initial laryngeal muscle activation levels leading to the same output. The model can also generate prosodic contours for studying laryngeal motor control in running speech. LaDIVA can expand the understanding of the physiology of human phonation and enable, for the first time, the investigation of causal effects of neural motor control on the fine structure of the vocal signal.
    Author affiliations: Weerathunge, Hasini R. (Boston University, United States); Alzamendi, Gabriel Alejandro (Instituto de Investigación y Desarrollo en Bioingeniería y Bioinformática, Universidad Nacional de Entre Ríos and CONICET, Argentina); Cler, Gabriel J. (University of Washington, United States); Guenther, Frank H. (Boston University, United States); Stepp, Cara E. (Boston University, United States); Zañartu, Matías (Universidad Técnica Federico Santa María, Chile)
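    The reflexive mode described above can be illustrated with a toy discrete-time feedback loop: the speaker hears their own f0 shifted by the perturbation and opposes the perceived error step by step. This is only a sketch of the control principle, not LaDIVA itself (which couples a full DIVA network to a body-cover vocal fold model); the gain and step count are arbitrary assumptions, and unlike real human responses, which are partial, this toy loop converges to full compensation.

```python
import math

def reflexive_f0_response(target_f0, perturb_cents, gain=0.3, steps=50):
    """Toy auditory feedback loop: on each step the speaker hears their
    produced f0 shifted by perturb_cents and nudges the motor command
    in the direction opposing the perceived error (in cents)."""
    motor_f0 = target_f0
    trace = []
    for _ in range(steps):
        heard_f0 = motor_f0 * 2.0 ** (perturb_cents / 1200.0)   # shifted feedback
        error_cents = 1200.0 * math.log2(heard_f0 / target_f0)  # perceived error
        motor_f0 *= 2.0 ** (-gain * error_cents / 1200.0)       # opposing update
        trace.append(motor_f0)
    return trace

# A +100-cent upward pitch-shift perturbation drives produced f0 downward,
# i.e., a compensatory response opposing the shift.
trace = reflexive_f0_response(target_f0=200.0, perturb_cents=100.0)
```

    The same loop structure underlies the motor-equivalence point in the abstract: different starting states converge on the same acoustic output because the controller acts on the auditory error, not on any particular muscle configuration.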

    Repeated Training with Augmentative Vibrotactile Feedback Increases Object Manipulation Performance

    Most users of prosthetic hands must rely on visual feedback alone, which requires visual attention and cognitive resources. Providing haptic feedback of variables relevant to manipulation, such as contact force, may therefore improve the usability of prosthetic hands for tasks of daily living. Vibrotactile stimulation was explored as a feedback modality in ten unimpaired participants across eight sessions in a two-week period. Participants used their right index finger to perform a virtual object manipulation task with both visual and augmentative vibrotactile feedback related to force. Through repeated training, participants learned to use the vibrotactile feedback to significantly improve object manipulation. Removal of vibrotactile feedback in session 8 significantly reduced task performance. These results suggest that vibrotactile feedback paired with training may enhance the manipulation ability of prosthetic hand users without the need for more invasive strategies.

    Implementation and Characterization of Vibrotactile Interfaces

    While a standard approach to rendering basic vibratory cues in consumer electronics is more or less established, the implementation of advanced vibrotactile feedback still requires designers and engineers to solve a number of technical issues. Several off-the-shelf vibration actuators are currently available, with different characteristics and limitations that should be considered in the design process. We suggest an iterative approach to design in which vibrotactile interfaces are validated by testing their accuracy in rendering vibratory cues and in measuring input gestures. Several examples of prototype interfaces yielding audio-haptic feedback are described, ranging from open-ended devices to musical interfaces, addressing their design and the characterization of their vibratory output.

    Acoustics of the Human Middle-Ear Air Space

    The impedance of the middle-ear air space was measured on three human cadaver ears with complete mastoid air-cell systems. Below 500 Hz, the impedance is approximately compliance-like; at higher frequencies (500-6000 Hz) the impedance magnitude has several (five to nine) extrema. Mechanisms for these extrema are identified and described through circuit models of the middle-ear air space. The measurements demonstrate that the middle-ear air space impedance can affect the middle-ear impedance at the tympanic membrane by as much as 10 dB at frequencies greater than 1000 Hz. Thus, variations in the impedance that result from variations in anatomy of the middle-ear air space can contribute to inter-ear variations in both impedance measurements and otoacoustic emissions, when measured at the tympanic membrane.
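    In the low-frequency, compliance-like regime, an enclosed air space can be approximated as a single lumped acoustic compliance. A minimal sketch of that approximation follows; the 6 cm³ total air volume is an assumed, illustrative value, not a measurement from this study.

```python
import math

RHO = 1.18     # density of air, kg/m^3
C_AIR = 345.0  # speed of sound in air, m/s

def compliance_impedance_magnitude(volume_m3, freq_hz):
    """|Z| of a lumped acoustic compliance, Z = rho*c^2 / (j*omega*V),
    valid only well below the cavity's standing-wave resonances."""
    omega = 2.0 * math.pi * freq_hz
    return RHO * C_AIR ** 2 / (omega * volume_m3)

# Assumed total middle-ear + mastoid air volume of 6 cm^3 (6e-6 m^3).
z_250 = compliance_impedance_magnitude(6e-6, 250.0)
z_500 = compliance_impedance_magnitude(6e-6, 500.0)
```

    The magnitude falls as 1/f, the signature of a compliance; the extrema observed above 500 Hz require distributed, multi-element circuit models of the air-cell network rather than this single lumped element.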

    Effects of augmentative visual training on audio-motor mapping

    The purpose of this study was to determine the effect of augmentative visual feedback training on auditory-motor performance. Thirty-two healthy young participants used facial surface electromyography (sEMG) to control a human-machine interface (HMI) whose output was vowel synthesis. An auditory-only (AO) group (n = 16) trained with auditory feedback alone, and an auditory-visual (AV) group (n = 16) trained with auditory feedback plus progressively removed visual feedback. Participants completed three training sessions and one testing session over 3 days. During the testing session they were given novel targets to test auditory-motor generalization. We hypothesized that the AV group would perform better on the novel set of targets than the group that trained with auditory feedback only. Analysis of variance on the percentage of total targets reached indicated a significant interaction between group and session: individuals in the AV group performed significantly better than those in the AO group during early training sessions (while using visual feedback), but no difference was seen between the two groups during later sessions. Results suggest that augmentative visual feedback during training does not improve auditory-motor performance.